Bayesian information criterion : Wikipedia (English edition)
Bayesian information criterion

In statistics, the Bayesian information criterion (BIC) or Schwarz criterion (also SBC, SBIC) is a criterion for model selection among a finite set of models; the model with the lowest BIC is preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC).
When fitting models, it is possible to increase the likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC.
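The difference between the two penalty terms can be illustrated with a short sketch (the helper names are illustrative, not from any library): the AIC penalty is 2k, while the BIC penalty is k·ln(n), so BIC penalizes extra parameters more heavily whenever ln(n) > 2, i.e. for n greater than e² ≈ 7.4 observations.

```python
import math

def aic_penalty(k, n):
    """AIC complexity penalty: 2k, independent of sample size."""
    return 2 * k

def bic_penalty(k, n):
    """BIC complexity penalty: k * ln(n), growing with sample size."""
    return k * math.log(n)

# For k = 3 parameters, compare the penalties at several sample sizes.
for n in (5, 8, 100):
    print(n, aic_penalty(3, n), round(bic_penalty(3, n), 2))
```

For n = 5 the AIC penalty is still the larger of the two; from n = 8 onward the BIC penalty dominates and keeps growing with n.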
The BIC was developed by Gideon E. Schwarz and published in a 1978 paper, in which he gave a Bayesian argument for adopting it.
== Definition ==
The BIC is formally defined as
: \mathrm{BIC} = -2 \cdot \ln\hat{L} + k \cdot \ln(n).
where
*x = the observed data;
*\theta = the parameters of the model;
*n = the number of data points in x, the number of observations, or equivalently, the sample size;
*k = the number of free parameters to be estimated. If the model under consideration is a linear regression, k is the number of regressors, including the intercept;
*\hat L = the maximized value of the likelihood function of the model M, i.e. \hat L=p(x|\hat\theta,M), where \hat\theta are the parameter values that maximize the likelihood function.
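With the symbols above, the definition translates directly into code. The numbers below are hypothetical (a fit with maximized log-likelihood ln L̂ = −120.5, k = 3 free parameters, n = 100 observations), chosen only to show the arithmetic:

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian information criterion: BIC = -2*ln(L-hat) + k*ln(n).

    log_likelihood -- maximized log-likelihood ln(L-hat) of the model
    k              -- number of free parameters estimated
    n              -- number of observations (sample size)
    """
    return -2.0 * log_likelihood + k * math.log(n)

# Hypothetical fitted model: -2*(-120.5) + 3*ln(100)
print(round(bic(log_likelihood=-120.5, k=3, n=100), 2))  # 254.82
```

Note that the convention here takes the log-likelihood as input; some software reports the likelihood itself, in which case its logarithm must be taken first.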
The BIC is an asymptotic result derived under the assumptions that the data distribution is in the exponential family.
That is, the integral of the likelihood function p(x|\theta,M) times the prior probability distribution p(\theta|M) over the parameters \theta of the model M for fixed observed data x
is approximated as
: -2 \cdot \ln p(x\mid M) \approx \mathrm{BIC} = -2 \cdot \ln\hat{L} + k \cdot (\ln(n) - \ln(2\pi)).
For large n, this can be approximated by the formula given above.
The BIC is used in model selection problems where only differences in BIC between candidate models matter; adding the same constant to every model's BIC does not change which model is selected.
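A minimal selection sketch, with hypothetical log-likelihoods for two candidate models fitted to the same n = 200 observations: the richer model attains a slightly higher likelihood, but its larger parameter penalty outweighs the gain, so the simpler model is preferred (lowest BIC).

```python
import math

def bic(log_likelihood, k, n):
    """BIC = -2*ln(L-hat) + k*ln(n); lower is better."""
    return -2.0 * log_likelihood + k * math.log(n)

n = 200
# Hypothetical candidates: (maximized log-likelihood, free parameters).
models = {"simple": (-310.0, 2), "complex": (-308.5, 6)}

scores = {name: bic(ll, k, n) for name, (ll, k) in models.items()}
best = min(scores, key=scores.get)
print(best)  # simple
```

Shifting every score by the same constant (e.g. dropping a shared additive term) would leave `best` unchanged, which is the sense in which only BIC differences matter.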

Excerpt source: the free encyclopedia Wikipedia, article "Bayesian information criterion".